Measurement of Human Trust in a Hybrid Inspection for Varying Error Patterns

Authors

  • Kartik Madhani
  • Mohammad T Khasawneh
  • Sittichai Kaewkuekool
  • Anand K Gramopadhye
Abstract

The emphasis of this research was on the effect of human trust in a hybrid inspection system with varying error patterns. Experiments were conducted using a hybrid inspection task involving four common types of error patterns, and subjects were asked to rate their trust in the system at different stages. Results showed that subjects’ ratings of trust were based on how they perceived the behavior of the computer. However, this rating was not sensitive to the type of error pattern. A significant change in trust was found in all the systems considered for the study. In addition, the results reflect that there was a significant decrease in trust when subjects inspected the assigned experimental system after inspecting the perfect one. Finally, the components of trust that fit into the trust model at each stage of a particular system were determined using stepwise regression.

INTRODUCTION

In its simplest sense, visual inspection is a careful search for nonconformities in a product (Thapa et al., 1996). Whether performed by humans, machines, or a combination of the two, an inspection system must perform certain functions. The two most central functions, visual search and decision-making (Drury, 1978; Sinclair, 1984), have been shown to be the primary determinants of inspection performance (Thapa et al., 1996). With the demand for zero-defect products and the requirement of shorter lead-time production, inspection systems have changed from using sampling and human inspectors to 100% inspection and automated systems (Hou et al., 1993). In an automated system, the supervisor’s choice of manual or automatic control can have important consequences for system performance. These systems are designed to run primarily in automatic mode for maximum accuracy and productivity over long periods of time.
If the supervisor overrides the automation too frequently or is too hesitant to take manual control, system performance will be compromised, with potentially disastrous consequences (Muir, 1994). Human behavior in such supervisory tasks has always been a key issue for researchers. As a result, a variety of models have been developed to describe the different aspects of supervisory control behavior (e.g., Moray, 1987; Sheridan, 1986). All of these models have tried to construct a comprehensive framework to describe, explain, and predict human behavior in an automated task, with research showing that a key factor in defining human behavior may be the operator’s trust in automation (Muir, 1987; Muir, 1994; Sheridan and Hennessy, 1984; Muir et al., 1996). This idea of trust has been defined in many ways in the psychological literature, but most of the definitions are either too narrow or too vague to be tested, or fail to explicitly acknowledge its multidimensional nature (Muir, 1994). Barber (1983) defined trust, recognizing its multidimensional character, in terms of a taxonomy of three specific expectations: persistence, technical competence, and fiduciary responsibility. Rempel et al. (1985) used this definition to develop a model to measure the trust humans have in machines. Their model was described in hierarchical, fixed stages, with the trust at any given level based upon the outcome of the preceding stage. They determined that predictability dominates early in the relationship, followed by dependability and faith. Muir (1994) concluded that Barber’s model provided the broader context and richness of meaning needed to characterize the myriad interactions in a complex supervisory task, while Rempel et al.’s provided the dynamic factor needed to predict how trust may change as a result of experience with a system; she therefore combined the two to develop a more comprehensive model of trust in automation.
Because Muir’s model (1994) was based on models that came directly from interpersonal relationships rather than human-computer relationships, new models that are more appropriate for the measurement of trust in automated inspection tasks were developed (Jian et al., 2000; Master et al., 2000). This research used the model recently developed by Master et al. (2000) to determine how the level of trust an operator has in the system affects hybrid inspection performance. This information was supplemented by the questionnaire developed by Jian et al. (2000), which is more general in its measurement of human trust in automation. One important area of concern when considering trust is identifying whether an operator’s trust in an automated system is impaired by the occurrence of errors, specifically determining whether errors occurring in a particular time frame have an effect on trust. Considering the evolution of trust as an intervening variable in automated systems, questions may be posed to identify human-machine trust under normal operating conditions and changes in trust when the automated system fails to function properly, i.e., when errors occur. Some of the questions that need to be addressed are:

  • Do changes in the pattern of error occurrence have an impact on overall system trust, and if so, what is the relationship?
  • Do the components of overall trust, i.e., the contributions of individual trust components, change with the type of error pattern?
  • Which components of the trust model contribute to overall trust?

Therefore, the objective of this paper is to examine the changes in trust for the following varying error patterns: a)

Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting - 2002
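The abstract states that the components contributing to overall trust at each stage were identified with stepwise regression. The sketch below is a rough illustration of a forward stepwise procedure, not the authors' actual analysis: at each step it adds the candidate trust component that most improves the fit (R²) of overall trust ratings. All ratings, the component names, and the `min_gain` stopping threshold are invented for illustration.

```python
def ols_r2(X_cols, y):
    """Fit y = b0 + b.X by ordinary least squares (normal equations); return R^2."""
    n, k = len(y), len(X_cols)
    m = k + 1  # number of coefficients, including the intercept
    # Design matrix rows: [1, x1, x2, ...] for each observation.
    X = [[1.0] + [col[r] for col in X_cols] for r in range(n)]
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination with pivoting.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(m)] for i in range(m)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(m)]
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, m):
            f = A[r][i] / A[i][i]
            for c in range(i, m):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * m
    for i in reversed(range(m)):  # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, m))) / A[i][i]
    yhat = [sum(coef[j] * X[r][j] for j in range(m)) for r in range(n)]
    ybar = sum(y) / n
    ss_res = sum((y[r] - yhat[r]) ** 2 for r in range(n))
    ss_tot = sum((v - ybar) ** 2 for v in y)
    return 1.0 - ss_res / ss_tot

def forward_stepwise(components, y, min_gain=0.01):
    """Greedily add the component giving the largest R^2 gain; stop when the gain is small."""
    chosen, best_r2 = [], 0.0
    remaining = dict(components)
    while remaining:
        name, r2 = max(
            ((nm, ols_r2([components[c] for c in chosen] + [col], y))
             for nm, col in remaining.items()),
            key=lambda t: t[1],
        )
        if r2 - best_r2 < min_gain:
            break
        chosen.append(name)
        best_r2 = r2
        del remaining[name]
    return chosen, best_r2

# Invented 7-point-scale ratings from ten hypothetical subjects.
predictability = [4.0, 5, 3, 6, 7, 5, 6, 4, 7, 6]
dependability  = [5.0, 6, 4, 6, 8, 5, 7, 5, 8, 6]
faith          = [3.0, 4, 4, 5, 6, 4, 5, 3, 6, 5]
overall_trust  = [4.0, 5, 4, 6, 8, 5, 7, 4, 8, 6]

order, r2 = forward_stepwise(
    {"predictability": predictability, "dependability": dependability, "faith": faith},
    overall_trust,
)
print(order, round(r2, 2))  # components in order of entry, and the final R^2
```

The order in which components enter the model is what the paper's question (c) asks about: an early-entering component explains more unique variance in overall trust.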


Similar articles

Measurement of human trust in a hybrid inspection system based on signal detection theory measures

Human trust plays an important role in influencing operator’s strategies toward the use of automated systems. Therefore, a study was conducted to measure the effect of human trust in a hybrid inspection system given different types of errors (i.e., false alarms and misses). The study also looked at which of the four dimensions of trust (competence, predictability, reliability and faith) were th...
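The signal-detection measures mentioned in this related study can be derived from hit and false-alarm rates; the standard sensitivity index is d′ = z(hit rate) − z(false-alarm rate). A minimal sketch, with an invented example inspector (`NormalDist` is from Python's standard library):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Invented example: an inspector detects 90% of defects with 10% false alarms.
print(round(d_prime(0.90, 0.10), 3))  # prints 2.563; higher d' = better discrimination
```

Misses lower the hit rate and false alarms raise the false-alarm rate, so both error types reduce d′, which is why the two can be compared on a common scale.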


Measurement of Trust in Hybrid Inspection Systems: Review and Evaluation of Current Methodologies and Future Approach

Recent advances in computer technology have motivated the automation of various functions of the inspection task, an important process in quality control. However, since purely automated inspection has limitations, hybrid systems, ones taking advantage of the superiority of humans in pattern recognition, rational decision-making, and adaptability to new circumstances, are preferable. To achieve...


Correlating Trust and Performance in A Hybrid Inspection Environment

Studies have shown that neither humans nor automation alone can achieve superior inspection system performance; hence, hybrid inspection systems, where humans work in conjunction with machines, have attracted research. Over the years, performance of inspection systems has been evaluated using traditional measures of speed and accuracy. However, human trust plays an important role, influencing the resp...


Providing an optimal model for gaining public trust in the Complaints System and Inspection Agency's announcements

This research seeks to provide an optimal model for gaining public trust in the Complaints System and Inspection Agency's announcements. In this research, a survey method and a researcher-made questionnaire were used. The study population consisted of customers and users of the complaint handling system and the organization's inspection reports in Mazandaran province, comprising equal numbers o...


AHP Techniques for Trust Evaluation in Semantic Web

The increasing reliance on information gathered from the web and other internet technologies raises the issue of trust. With the development of the Semantic Web, one major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each resource. Each user knows the trustworthiness of ...



Publication date: 1999